
Human Cognition



Learning Neural Representations of Human Cognition across Many fMRI Studies

Neural Information Processing Systems

Cognitive neuroscience is enjoying a rapid increase in large public brain-imaging datasets, which opens the door to large-scale statistical models. Finding a unified perspective on all available data calls for scalable and automated solutions to an old challenge: how to aggregate heterogeneous information on brain function into a universal cognitive system that relates mental operations, cognitive processes, and psychological tasks to brain networks. We cast this challenge as a machine-learning problem: predicting experimental conditions from statistical brain maps across different studies. To do so, we leverage multi-task learning and multi-scale dimension reduction to learn low-dimensional representations of brain images that carry cognitive information and can be robustly associated with psychological stimuli. Our multi-dataset classification model achieves the best prediction performance on several large reference datasets, compared with models lacking cognition-aware low-dimensional representations; it brings a substantial performance boost to the analysis of small datasets and can be introspected to identify universal template cognitive concepts.
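The core idea in this abstract, a representation shared across studies with study-specific prediction heads, can be caricatured in a few lines. The sketch below is a toy numpy illustration of multi-task learning with a shared low-dimensional linear encoder and per-study softmax heads, trained jointly on synthetic data; all dimensions, data, and training settings are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two "studies": feature vectors (rows) with
# study-specific label sets of different sizes.
d, k = 20, 3                        # input dim, shared low-dim representation
studies = []
for n_classes in (2, 4):            # each study predicts its own conditions
    n = 200
    y = rng.integers(0, n_classes, n)
    centers = rng.normal(size=(n_classes, d))
    X = centers[y] + 0.5 * rng.normal(size=(n, d))
    studies.append((X, y, n_classes))

W = 0.1 * rng.normal(size=(d, k))                  # shared encoder (all studies)
heads = [0.1 * rng.normal(size=(k, c)) for *_, c in studies]  # per-study heads

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for step in range(500):
    for i, (X, y, c) in enumerate(studies):
        H = X @ W                        # shared low-dimensional representation
        P = softmax(H @ heads[i])        # study-specific condition probabilities
        G = P.copy()
        G[np.arange(len(y)), y] -= 1     # softmax cross-entropy gradient
        G /= len(y)
        heads[i] -= lr * (H.T @ G)                # update this study's head
        W -= lr * (X.T @ (G @ heads[i].T))        # update the shared encoder

def accuracy(i):
    X, y, _ = studies[i]
    return (softmax((X @ W) @ heads[i]).argmax(axis=1) == y).mean()
```

The point of the sketch is that `W` is updated by gradients from every study, so the low-dimensional space it spans must carry information useful across tasks, which is the mechanism the abstract credits for the boost on small datasets.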



Approximating the Mathematical Structure of Psychodynamics

Bagley, Bryce-Allen, Khoshnan, Navin

arXiv.org Artificial Intelligence

The complexity of human cognition has meant that psychology makes more use of theory and conceptual models than perhaps any other biomedical field. To enable precise quantitative study of the full breadth of phenomena in psychological and psychiatric medicine, as well as cognitive aspects of AI safety, there is a need for a formulation that is mathematically precise yet accessible to experts from numerous fields. In this paper we formalize human psychodynamics via the diagrammatic framework of process theory, describe its key properties, and explain the links between the diagrammatic representation and central concepts in the analysis of cognitive processes in contexts such as psychotherapy, neurotechnology, AI alignment, AI-agent representation of individuals in autonomous negotiations, the development of human-like AI systems, and other aspects of AI safety.


The normalization of (almost) everything: Our minds can get used to anything, and even crises start feeling normal

Science

For a long time, many climate scientists and advocates held onto an optimistic belief that once the impacts of climate change became undeniable, people and governments would act. But whereas the predictions of climate models have increasingly been borne out, the assumptions about human behavior have not. Even as disasters mount, climate change remains low on voters' priority lists, and policy responses remain tepid. To me, this gap reflects a deeper failure: not just in policy or communication, but in how we understand human adaptability. When I began my career as a computational cognitive scientist, I was drawn to a defining strength of human cognition: a marked ability to adapt.


The Man Who Invented AGI

WIRED

Everyone is obsessed with artificial general intelligence: the stage when AI can match all feats of human cognition. The guy who named it saw it as a threat. In the summer of 1956, a group of academics (now we'd call them computer scientists, but there was no such thing then) met on the Dartmouth College campus in New Hampshire to discuss how to make machines think like humans. One of them, John McCarthy, coined the term "artificial intelligence." This legendary meeting, and the naming of a new field, are well known.


Toward Carbon-Neutral Human AI: Rethinking Data, Computation, and Learning Paradigms for Sustainable Intelligence

Santosh, KC, Rizk, Rodrigue, Wang, Longwei

arXiv.org Artificial Intelligence

Abstract. The rapid advancement of Artificial Intelligence (AI) has led to unprecedented computational demands, raising significant environmental and ethical concerns. We introduce a novel framework, Human AI (HAI), which emphasizes incremental learning, carbon-aware optimization, and human-in-the-loop collaboration to enhance adaptability, efficiency, and accountability. By drawing parallels with biological cognition and leveraging dynamic architectures, HAI seeks to balance performance with ecological responsibility. We detail the theoretical foundations, system design, and operational principles that enable AI to learn continuously and contextually while minimizing carbon footprints and human annotation costs.

I. Introduction. Artificial Intelligence (AI) has undergone unprecedented growth in the past decade, with state-of-the-art models achieving remarkable breakthroughs across domains such as natural language processing, computer vision, drug discovery, and climate modeling. However, this rapid progress comes at a substantial environmental cost. While the current AI paradigm largely emphasizes scale, i.e., more data, bigger models, and higher compute budgets, emerging research suggests that more sustainable paths are not only possible but necessary. In particular, the reliance on large, indiscriminately collected datasets is increasingly being challenged. The COVID-19 pandemic, for example, underscored the need for agile learning systems capable of adapting rapidly to limited, evolving data.


Can AI Expand the Human Mind?

Communications of the ACM

LLMs could represent a new layer of human cognition, which researchers call "System 0." Giuseppe Riva first started to think about the role that artificial intelligence (AI) can play in human cognition when he and a colleague were trying to find someplace to have dinner in Los Angeles. Both pulled out their phones and started perusing Google Maps for suggestions of nearby restaurants. Riva quickly noticed that the list of possibilities on his phone was very different from what his companion was seeing.


How important is language for human-like intelligence?

Lupyan, Gary, Gentry, Hunter, Zettersten, Martin

arXiv.org Artificial Intelligence

We use language to communicate our thoughts. But is language merely the expression of thoughts, which are themselves produced by other, nonlinguistic parts of our minds? Or does language play a more transformative role in human cognition, allowing us to have thoughts that we otherwise could (or would) not have? Recent developments in artificial intelligence (AI) and cognitive science have reinvigorated this old question. We argue that language may hold the key to the emergence of both more general AI systems and central aspects of human intelligence. We highlight two related properties of language that make it such a powerful tool for developing domain-general abilities. First, language offers compact representations that make it easier to represent and reason about many abstract concepts (e.g., exact numerosity). Second, these compressed representations are the iterated output of collective minds. In learning a language, we learn a treasure trove of culturally evolved abstractions. Taken together, these properties mean that a sufficiently powerful learning system exposed to language, whether biological or artificial, learns a compressed model of the world, reverse engineering many of the conceptual and causal structures that support human (and human-like) thought.


Invisible Architectures of Thought: Toward a New Science of AI as Cognitive Infrastructure

Riva, Giuseppe

arXiv.org Artificial Intelligence

Contemporary human-AI interaction research faces a significant limitation: existing frameworks are inadequate to explain how artificial intelligence systems fundamentally reshape human cognition before conscious awareness occurs. This preprocessing influence, which operates beneath the threshold of deliberate thought, represents a crucial missing layer in the understanding of distributed cognition. This paper introduces "Cognitive Infrastructure Studies" (CIS) as a new interdisciplinary domain that reconceptualizes AIs as "cognitive infrastructures". Cognitive infrastructures, e.g., search engines, recommender systems, algorithmic curation platforms, and large language models, exhibit classic infrastructural properties: they are invisible in normal operation, becoming visible only upon breakdown; they are embedded in social and technical arrangements; they are learned as part of membership in digital communities; they link with conventions of practice; and they embody standards that shape what counts as appropriate, relevant, or true. Yet cognitive infrastructures possess distinctive characteristics that distinguish them from traditional infrastructures. Unlike physical infrastructures that passively transport matter or energy, cognitive infrastructures have agency, filtering and curating individuals' perception of reality before it reaches human consciousness. Through narrative scenarios spanning individual (cognitive dependency), collective (democratic deliberation), and societal (governance) scales, we describe how cognitive infrastructures reshape human cognition, public reasoning, and social epistemologies. CIS also provides methodological innovations for studying invisible algorithmic influence: "infrastructure breakdown methodologies", experimental approaches that reveal cognitive dependencies by systematically withdrawing AI preprocessing after periods of habituation.